Sandwich: Separating Prefill-Decode Compilation for Efficient CPU LLM Serving

Zhao, Juntao, Li, Jiuru, Wu, Chuan

arXiv.org Artificial Intelligence

Utilizing CPUs to serve large language models (LLMs) is a resource-friendly alternative to GPU serving. Existing CPU-based solutions ignore workload differences between the prefill and decode phases of LLM inference, applying a static per-NUMA (Non-Uniform Memory Access) node model partition and relying on vendor libraries for operator-level execution, which is suboptimal. We propose Sandwich, a hardware-centric CPU-based LLM serving engine that uses different execution plans for the prefill and decode phases and optimizes them separately. We evaluate Sandwich across diverse baselines and datasets on five CPU platforms, including x86 with AVX-2 and AVX-512, as well as ARM with NEON. Sandwich achieves an average 2.01x throughput improvement and 90% satisfactory time-to-first-token (TTFT) and time-per-output-token (TPOT) latencies with up to 3.40x lower requirements in single-sequence serving, and a significant goodput improvement in continuous-batching serving. The GEMM kernels generated by Sandwich outperform representative vendor kernels and other dynamic-shape solutions, achieving performance comparable to static compilers at three orders of magnitude lower kernel-tuning cost.
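The core idea the abstract describes, dispatching each inference phase to its own tuned execution plan, can be illustrated with a minimal sketch. This is not Sandwich's implementation; all names and parameter values below are hypothetical placeholders. Prefill processes the whole prompt in one compute-bound step, while decode generates one token per memory-bound step, so a phase-aware engine picks a different plan for each.

```python
# Illustrative sketch (not the paper's code): phase-aware plan dispatch for
# CPU LLM serving. Prefill is compute-bound (large GEMMs over the full
# prompt); decode is memory-bound (single-token steps). All names and
# numbers are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class ExecutionPlan:
    name: str
    threads: int   # placeholder: thread/core partitioning for this phase
    tile_m: int    # placeholder: GEMM tile height (M dimension)


# Hypothetical plans: prefill favors wide parallel GEMM tiles; decode favors
# a tall-skinny (batch-of-1) shape with fewer threads to limit contention.
PREFILL_PLAN = ExecutionPlan("prefill", threads=32, tile_m=128)
DECODE_PLAN = ExecutionPlan("decode", threads=16, tile_m=1)


def select_plan(num_new_tokens: int) -> ExecutionPlan:
    """Pick a plan per step: a prefill step ingests many prompt tokens at
    once, whereas a decode step produces exactly one token."""
    return PREFILL_PLAN if num_new_tokens > 1 else DECODE_PLAN


# A 512-token prompt runs under the prefill plan; every subsequent
# one-token generation step runs under the decode plan.
assert select_plan(512) is PREFILL_PLAN
assert select_plan(1) is DECODE_PLAN
```

The contrast with the static designs the abstract criticizes is that those use one fixed partition and kernel configuration for both phases, whereas a dispatcher like this lets each phase be optimized independently.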


Using DC/OS to Accelerate Data Science in the Enterprise - KDnuggets

#artificialintelligence

As a full-stack machine learning consultant who focuses on building and delivering new products to market, I've often found myself at the intersection of data science, data engineering, and DevOps. So it has been with great interest that I've followed the rise of data science Platforms as a Service (PaaS). I recently set out to evaluate different PaaS offerings and their potential to automate data science operations. I'm trying to find the best way for the book's readers to work through the examples. In my last book, Agile Data Science 2.0 (4.5 stars), I built my own platform for readers to run the code using bash scripts, the AWS CLI, jq, Vagrant, and EC2.